Empirical Evaluation of Mutation-based Test Prioritization Techniques
We propose a new test case prioritization technique that combines both
mutation-based and diversity-based approaches. Our diversity-aware
mutation-based technique relies on the notion of mutant distinguishment, which
aims to distinguish one mutant's behavior from another, rather than from the
original program. We empirically investigate the relative cost and
effectiveness of the mutation-based prioritization techniques (i.e., using both
the traditional mutant kill and the proposed mutant distinguishment) with 352
real faults and 553,477 developer-written test cases. The empirical evaluation
considers both the traditional and the diversity-aware mutation criteria in
various settings: single-objective greedy, hybrid, and multi-objective
optimization. The results show that there is no single dominant technique
across all the studied faults. To this end, we show when and why each of the
mutation-based prioritization criteria performs poorly, using a graphical
model called the Mutant Distinguishment Graph (MDG) that depicts the
distribution of the fault-detecting test cases with respect to mutant kills
and distinguishment.
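The notion of mutant distinguishment above can be sketched as follows. This is a toy illustration of our own (the outcome table and function names are hypothetical, not the paper's): the traditional criterion compares a mutant against the original program, while distinguishment compares two mutants against each other.

```python
# Hypothetical sketch of mutant distinguishment (names are illustrative).
# Each mutant is represented by the outcome it produces under each test;
# a test "distinguishes" mutants m1 and m2 if their outcomes under it differ,
# whereas the traditional criterion only compares a mutant to the original.

def kills(outcomes, original, mutant, test):
    """Traditional criterion: a test kills a mutant if its outcome differs
    from the original program's outcome on that test."""
    return outcomes[mutant][test] != outcomes[original][test]

def distinguishes(outcomes, m1, m2, test):
    """Diversity-aware criterion: a test separates two mutants' behaviors."""
    return outcomes[m1][test] != outcomes[m2][test]

# Toy outcome table: program/mutant -> test -> observed result.
outcomes = {
    "orig": {"t1": "pass", "t2": "pass"},
    "m1":   {"t1": "fail", "t2": "pass"},
    "m2":   {"t1": "fail", "t2": "fail"},
}

# t1 kills both mutants but cannot tell them apart; t2 distinguishes them,
# so a test suite relying on kills alone would see t1 and t2 as equivalent.
assert kills(outcomes, "orig", "m1", "t1") and kills(outcomes, "orig", "m2", "t1")
assert not distinguishes(outcomes, "m1", "m2", "t1")
assert distinguishes(outcomes, "m1", "m2", "t2")
```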
An Optimization-based approach for vaccine prioritization
An effective vaccine prioritization process is essential to prevent the many issues that currently weaken global vaccination efforts. Identifying challenges associated with vaccine development is important when considering which initiatives will provide immunization that is effective, affordable, and easy to administer. The process of establishing priorities for vaccine development is complicated, though, by the conflicting interests of multiple stakeholders involved in the vaccine market. Additionally, uncertainties exist regarding: (1) the resources and time required for vaccine development, (2) the expected benefits of development, and (3) the anticipated demand for vaccination, further complicating the prioritization process. This study proposes a decision-support tool for prioritizing vaccine initiatives through the use of mathematical optimization models. The tool allows a panel of decision makers to assess vaccine candidates over multiple criteria with information that is both quantitative and qualitative. This assessment is the result of a methodology that integrates Data Envelopment Analysis and the Analytic Hierarchy Process. The decision-support tool could be used by researchers and funding agencies to determine which vaccine initiatives to pursue: those that are more effective, affordable, profitable, and reliable, easier to use and store, and better suited to the needs of multiple populations across diverse locations with differing logistic needs.
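One ingredient of the methodology, the Analytic Hierarchy Process, can be sketched in a few lines. This is an illustrative example of our own (the criteria, judgments, and candidate names are hypothetical, and it omits the Data Envelopment Analysis stage): criteria weights are derived from a pairwise-comparison matrix via the standard row geometric-mean approximation and then used to score candidates.

```python
# Illustrative AHP sketch (not the paper's exact model): derive criteria
# weights from a pairwise-comparison matrix, then rank vaccine candidates.
import math

def ahp_weights(pairwise):
    """Approximate the AHP priority vector via row geometric means."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Pairwise judgments over three hypothetical criteria:
# effectiveness, cost, logistics. pairwise[i][j] = how strongly
# criterion i is preferred over criterion j (Saaty-style 1..9 scale).
pairwise = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = ahp_weights(pairwise)

# Candidate scores on the three criteria, already normalized to [0, 1].
candidates = {"vaccine_A": [0.9, 0.4, 0.6], "vaccine_B": [0.5, 0.9, 0.8]}
scores = {name: sum(wi * xi for wi, xi in zip(w, x))
          for name, x in candidates.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here effectiveness dominates the judgments, so the candidate strongest on that criterion comes out first despite weaker cost and logistics scores.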
Forecasting Recharging Demand to Integrate Electric Vehicle Fleets in Smart Grids
Electric vehicle fleets and smart grids are two growing technologies that
provide new possibilities to reduce pollution and increase energy efficiency.
In this sense, electric vehicles are used as mobile loads in the power grid. A distributed
charging prioritization methodology is proposed in this paper. The solution is based
on the concept of virtual power plants and the usage of evolutionary computation
algorithms. Additionally, several evolutionary algorithms (a genetic
algorithm, a genetic algorithm with evolution control, particle swarm
optimization, and a hybrid solution) are compared in order to evaluate the
proposed architecture. The proposed solution aims to prevent overload of the
power grid.
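A genetic algorithm for charging prioritization can be sketched as follows. This is a minimal illustration of our own, not the paper's architecture: a chromosome is an ordering of vehicles, fitness is the peak load produced by assigning them greedily to the least-loaded time slot, and evolution uses elitist selection with swap mutation.

```python
# Minimal GA sketch for EV charging prioritization (illustrative only):
# evolve an ordering of EVs so that greedy slot assignment keeps peak load low.
import random

def peak_load(order, demands, slots):
    """Assign EVs in priority order to the least-loaded slot; return peak."""
    load = [0.0] * slots
    for ev in order:
        load[load.index(min(load))] += demands[ev]
    return max(load)

def evolve(demands, slots, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    n = len(demands)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: peak_load(o, demands, slots))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda o: peak_load(o, demands, slots))
    return best, peak_load(best, demands, slots)

demands = [7.4, 11.0, 3.7, 22.0, 7.4, 11.0]   # hypothetical kW per vehicle
order, peak = evolve(demands, slots=3)
```

The peak can never drop below the largest single demand (22 kW here), which gives a simple lower bound for sanity-checking the result.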
Squeaky Wheel Optimization
We describe a general approach to optimization which we term `Squeaky Wheel'
Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution
which is then analyzed to find the trouble spots, i.e., those elements that,
if improved, are likely to improve the objective function score. The results of
the analysis are used to generate new priorities that determine the order in
which the greedy algorithm constructs the next solution. This
Construct/Analyze/Prioritize cycle continues until some limit is reached, or an
acceptable solution is found. SWO can be viewed as operating on two search
spaces: solutions and prioritizations. Successive solutions are only indirectly
related, via the re-prioritization that results from analyzing the prior
solution. Similarly, successive prioritizations are generated by constructing
and analyzing solutions. This `coupled search' has some interesting properties,
which we discuss. We report encouraging experimental results on two domains,
scheduling problems that arise in fiber-optic cable manufacturing, and graph
coloring problems. The fact that these domains are very different supports our
claim that SWO is a general technique for optimization.
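The Construct/Analyze/Prioritize cycle above can be illustrated compactly on graph coloring, one of the two reported domains. The instance and the analysis rule below are our own toy choices: the greedy constructor colors vertices in priority order, and the analyzer boosts the priority of vertices that received high colors (the "trouble spots").

```python
# Toy Squeaky Wheel Optimization loop on graph coloring (illustrative).
def construct(priorities, edges, n):
    """Greedy: color vertices in priority order with the lowest legal color."""
    order = sorted(range(n), key=lambda v: -priorities[v])
    color = {}
    for v in order:
        used = {color[u] for u in range(n)
                if (min(u, v), max(u, v)) in edges and u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return color

def analyze_and_prioritize(priorities, color):
    """Trouble spots are vertices forced into high colors; boost them."""
    return [p + color[v] for v, p in enumerate(priorities)]

# Small graph containing a triangle (0-1-2), so at least 3 colors are needed.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 4)}
n = 5
priorities = [0.0] * n
best = n
for _ in range(10):          # Construct / Analyze / Prioritize cycle
    color = construct(priorities, edges, n)
    best = min(best, max(color.values()) + 1)
    priorities = analyze_and_prioritize(priorities, color)
```

Note that successive colorings are related only through the re-prioritization, exactly the indirect "coupled search" the abstract describes.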
Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration
Testing in Continuous Integration (CI) involves test case prioritization,
selection, and execution at each cycle. Selecting the most promising test cases
to detect bugs is hard if there is uncertainty about the impact of committed
code changes or if traceability links between code and tests are not
available. This paper introduces Retecs, a new method for automatically
learning test case selection and prioritization in CI with the goal to minimize
the round-trip time between code commits and developer feedback on failed test
cases. The Retecs method uses reinforcement learning to select and prioritize
test cases according to their duration, time of last execution, and failure
history. In a constantly changing environment, where new test cases are created
and obsolete test cases are deleted, the Retecs method learns to prioritize
error-prone test cases higher under guidance of a reward function and by
observing previous CI cycles. By applying Retecs on data extracted from three
industrial case studies, we show for the first time that reinforcement learning
enables fruitful automatic adaptive test case selection and prioritization in
CI and regression testing.
Comment: Spieker, H., Gotlieb, A., Marijan, D., & Mossige, M. (2017).
Reinforcement Learning for Automatic Test Case Prioritization and Selection
in Continuous Integration. In Proceedings of the 26th International Symposium
on Software Testing and Analysis (ISSTA'17) (pp. 12--22). ACM.
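The reward-driven idea can be sketched in a heavily simplified form. This is our own bandit-style caricature, not the Retecs method itself (which learns with reinforcement-learning function approximation over duration, last execution, and failure history): each test keeps a learned value, failing tests receive a positive reward after every CI cycle, and the next cycle prioritizes by value.

```python
# Simplified reward-driven test prioritization (illustrative, not Retecs).
def prioritize(values):
    """Order tests by learned value, highest first."""
    return sorted(values, key=values.get, reverse=True)

def update(values, failed, lr=0.5):
    """Reward failing tests (they detected bugs); decay the rest toward 0."""
    for t in values:
        reward = 1.0 if t in failed else 0.0
        values[t] += lr * (reward - values[t])

# Hypothetical test suite and three observed CI cycles (sets of failures).
values = {"t_login": 0.0, "t_search": 0.0, "t_checkout": 0.0}
history = [{"t_checkout"}, {"t_checkout"}, {"t_checkout", "t_login"}]
for failed in history:
    update(values, failed)
ranking = prioritize(values)
```

After these cycles the chronically failing test ranks first, so the round-trip time to its next failure signal is minimized.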
Development of a variance prioritized multiresponse robust design framework for quality improvement
Robust design is a well-known quality improvement method that focuses on building quality into the design of products and services. Yet, most well-established robust design models only consider a single performance measure, and their prioritization schemes do not always address the inherent goal of robust design. This paper aims to propose a new robust design method for multiple quality characteristics where the goal is to first reduce the variability of the system under investigation and then attempt to locate the mean at the desired target value. The paper investigates the use of a response surface approach and a sequential optimization strategy to create a flexible and structured method for modeling multiresponse problems in the context of robust design. Nonlinear programming is used as an optimization tool. The proposed methodology is demonstrated through a numerical example. The results obtained from this example are compared to those of the traditional robust design method. For comparison purposes, the traditional robust design optimization models are reformulated within the nonlinear programming framework developed here. The proposed methodology consistently provides enhanced optimal robust design solutions. This paper is perhaps the first study on prioritized response robust design with the consideration of multiple quality characteristics. The findings and key observations of this paper will be of significant value to the quality and reliability engineering/management community.
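The variance-first sequential strategy can be illustrated with a toy single-response model. The quadratic mean and variance surfaces below are our own inventions (not the paper's case study), and a grid search stands in for the nonlinear programming solver: stage 1 minimizes variance, then stage 2 pulls the mean toward target among settings whose variance stays within a tolerance of the stage-1 optimum.

```python
# Toy variance-prioritized sequential optimization (illustrative models).
def mean_model(x):      # fitted response-surface mean (toy quadratic)
    return 50 + 4 * x - 0.5 * x * x

def var_model(x):       # fitted response-surface variance (toy quadratic)
    return 2 + (x - 3) ** 2

grid = [i / 100 for i in range(0, 801)]        # candidate settings 0..8

# Stage 1: minimize the variance of the response.
min_var = min(var_model(x) for x in grid)

# Stage 2: among near-minimum-variance settings, hit the target mean.
target, tol = 56.0, 0.5
feasible = [x for x in grid if var_model(x) <= min_var + tol]
best_x = min(feasible, key=lambda x: abs(mean_model(x) - target))
```

Reversing the stages (mean first, variance second) would pick a different setting, which is exactly the prioritization distinction the paper makes against mean-centric formulations.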
Scalable Robust Kidney Exchange
In barter exchanges, participants directly trade their endowed goods in a
constrained economic setting without money. Transactions in barter exchanges
are often facilitated via a central clearinghouse that must match participants
even in the face of uncertainty---over participants, existence and quality of
potential trades, and so on. Leveraging robust combinatorial optimization
techniques, we address uncertainty in kidney exchange, a real-world barter
market where patients swap (in)compatible paired donors. We provide two
scalable robust methods to handle two distinct types of uncertainty in kidney
exchange---over the quality and the existence of a potential match. The latter
case directly addresses a weakness in all stochastic-optimization-based methods
to the kidney exchange clearing problem, which all necessarily require explicit
estimates of the probability of a transaction existing---a still-unsolved
problem in this nascent market. We also propose a novel, scalable kidney
exchange formulation that eliminates the need for an exponential-time
constraint generation process in competing formulations, maintains provable
optimality, and serves as a subsolver for our robust approach. For each type of
uncertainty we demonstrate the benefits of robustness on real data from a
large, fielded kidney exchange in the United States. We conclude by drawing
parallels between robustness and notions of fairness in the kidney exchange
setting.
Comment: Presented at AAAI1
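The clearing problem at the core of kidney exchange can be sketched on a toy instance. This simplification is ours (real clearinghouses use integer programming and the paper's robust, constraint-generation-free formulation): enumerate short swap cycles in the pair-compatibility digraph and select vertex-disjoint cycles matching the most patient-donor pairs by brute force.

```python
# Toy kidney-exchange clearing: pick disjoint 2- and 3-cycles of swaps.
from itertools import combinations, permutations

def cycles_up_to_3(arcs, n):
    """All 2- and 3-cycles: each pair's donor gives to the next pair's
    patient. perm[0] == min(perm) dedupes rotations of the same cycle."""
    found = []
    for size in (2, 3):
        for combo in combinations(range(n), size):
            for perm in permutations(combo):
                if perm[0] == min(perm) and all(
                    (perm[i], perm[(i + 1) % size]) in arcs
                    for i in range(size)
                ):
                    found.append(perm)
    return found

def clear(arcs, n):
    """Brute force: the set of vertex-disjoint cycles matching most pairs."""
    cycles = cycles_up_to_3(arcs, n)
    best, best_matched = [], 0
    for k in range(len(cycles) + 1):
        for subset in combinations(cycles, k):
            used = [v for c in subset for v in c]
            if len(used) == len(set(used)) and len(used) > best_matched:
                best, best_matched = list(subset), len(used)
    return best, best_matched

# arcs (i, j): pair i's donor is compatible with pair j's patient.
arcs = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (0, 4), (4, 0)}
matching, matched = clear(arcs, n=5)
```

Robust variants then ask for matchings that remain good when some arcs (compatibilities) turn out not to exist, which is the uncertainty the abstract targets.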
Lightweight Blockchain Framework for Location-aware Peer-to-Peer Energy Trading
Peer-to-Peer (P2P) energy trading can facilitate integration of a large
number of small-scale producers and consumers into energy markets.
Decentralized management of these new market participants is challenging in
terms of market settlement, participant reputation and consideration of grid
constraints. This paper proposes a blockchain-enabled framework for P2P energy
trading among producer and consumer agents in a smart grid. A fully
decentralized market settlement mechanism is designed, which does not rely on a
centralized entity to settle the market and encourages producers and consumers
to negotiate on energy trading with their nearby agents truthfully. To this
end, the electrical distance of agents is considered in the pricing mechanism
to encourage agents to trade with their neighboring agents. In addition, a
reputation factor is considered for each agent, reflecting its past performance
in delivering the committed energy. Before starting the negotiation, agents
select their trading partners based on their preferences over the reputation
and proximity of the trading partners. An Anonymous Proof of Location (A-PoL)
algorithm is proposed that allows agents to prove their location without
revealing their real identity. The practicality of the proposed framework is
illustrated through several case studies, and its security and privacy are
analyzed in detail.
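The partner-selection step can be sketched with a simple preference score. The scoring function, weights, and producer names below are hypothetical (the paper's mechanism additionally involves blockchain settlement and the A-PoL protocol): a consumer ranks producers by trading off reputation against electrical distance, so nearby, reliable agents are favored.

```python
# Hypothetical partner-selection sketch: reputation vs. electrical distance.
def preference(reputation, distance, alpha=0.6):
    """Higher reputation and lower electrical distance score better;
    alpha balances the two terms (an assumed weighting, not the paper's)."""
    return alpha * reputation + (1 - alpha) / (1 + distance)

producers = {
    # name: (reputation in [0, 1], electrical distance to the consumer)
    "pv_rooftop_A": (0.90, 1.0),
    "wind_farm_B":  (0.95, 8.0),
    "pv_rooftop_C": (0.40, 0.5),
}
ranked = sorted(
    producers,
    key=lambda p: preference(*producers[p]),
    reverse=True,
)
```

The distant wind farm loses to the nearby rooftop producer despite a slightly better reputation, mirroring the distance-aware pricing incentive described above.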